Putting Ourselves Back in the Equation by George Musser


FIRST PERSON FIRST

Müller developed his inside-out view of physics partly as a fourth response to Boltzmann brains. Unlike Aaronson, he argues that a Boltzmann brain may have some inner experience after all, and he asks us to consider what that experience would be like. To that end, he has sought a comprehensive solution to the inside/outside problem.

Müller bases his thinking on philosophical idealism: the proposition that reality is mental and the physical world is our construct. “I actually drop the idea of a fundamental world out there,” he said. “I say, let’s not assume there is one. Let’s hope that we get it later on as a consequence or prediction of the theory. Let’s start from a kind of solipsistic point of view.” Most modern philosophers, not to mention physicists, detest idealism; it strikes them as mystical to assume that reality is all in our heads. Idealism also suffers from a reverse version of the hard problem of consciousness: If you assign primacy to the mind, how do you recover the physical?

Fortunately, you don’t have to go along with Müller’s contentious proposition. You can adopt a weaker form of his approach in which you continue to assume there is a reality independent of us and seek to describe how our brains perceive that reality. Müller’s ideas then become a physicist’s version of neuroscientific theories of perception such as predictive coding.

On this view, the world we create in our heads is a means to an end: to predict what we’ll see next based on what we’ve seen before—the same problem that machine learning aims to solve. For his analysis, Müller considered an idealized machine-learning technique developed by the computer scientist Ray Solomonoff in the 1960s.[31] You take all possible algorithms that perform a calculation, check whether they reproduce your observations so far, and keep those that do. Then you see what they predict for future data and take a weighted average of all their predictions, giving the shortest program the most weight since it offers the most parsimonious explanation of the data, in accordance with Occam’s razor.

Crucially, you don’t take just the simplest algorithm. Simple is often right, but not always, and it’s wise to keep your options open. Also, by keeping other algorithms in the mix, you obtain not just a single prediction, but a range of predictions with certain probabilities, known as algorithmic probability.
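To make the procedure concrete, here is a minimal sketch in Python. It is not Solomonoff's actual construction, which sums over every possible program and is therefore uncomputable; instead it stands in a small hand-picked hypothesis class. The `Hypothesis` structure, the `length_bits` field, and the function `predict_next_is_0` are illustrative assumptions, not part of the formalism.

```python
# Toy sketch of Solomonoff-style prediction. The real procedure sums over
# all possible programs and is uncomputable; here a tiny, hand-picked
# hypothesis class stands in. Each hypothesis carries a description length
# (a proxy for program length), a check that it reproduces the data seen
# so far, and its prediction for the next bit.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Hypothesis:
    name: str
    length_bits: int                     # proxy for the program's length
    consistent: Callable[[str], bool]    # can it reproduce the data so far?
    p_next_is_0: Callable[[str], float]  # its probability that the next bit is 0

def predict_next_is_0(data: str, hypotheses: list[Hypothesis]) -> float:
    """Algorithmic-probability-style prediction: keep the hypotheses
    consistent with the data, weight each by 2**(-length) so shorter
    programs count more (Occam's razor), and average their predictions."""
    survivors = [h for h in hypotheses if h.consistent(data)]
    if not survivors:
        raise ValueError("no hypothesis reproduces the observed data")
    weights = [2.0 ** -h.length_bits for h in survivors]
    total = sum(weights)
    return sum(w * h.p_next_is_0(data) for w, h in zip(weights, survivors)) / total
```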

For instance, suppose you observe a string of computer bits: 11001001. There are multiple ways to interpret them. They look like a coin toss, in which case there’s a fifty-fifty chance the next bit will be 0. But those bits also happen to be the start of π in binary notation—so if the data really does represent π, the next bit will definitely be 0. A coin toss is the simplest algorithm, so you tentatively assign a fifty-fifty chance to 0 or 1. Then you nudge the odds slightly in favor of 0, to account for the possibility that the data represents π after all.
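Running the passage’s example through the sketch above shows the nudge numerically. The bits of π are real, but the description lengths below are invented for illustration, chosen only so that the coin-toss program counts as shorter.

```python
# The bit string from the text, interpreted two ways. PI_BITS holds the
# leading binary digits of pi (11.001001000011111101101010...) with the
# binary point dropped; the length_bits values are illustrative only.
PI_BITS = "110010010000111111011010"

coin = Hypothesis(
    name="coin toss",
    length_bits=5,                               # assumed: a very short program
    consistent=lambda d: True,                   # a fair coin can emit any string
    p_next_is_0=lambda d: 0.5,
)
pi_hypothesis = Hypothesis(
    name="digits of pi",
    length_bits=10,                              # assumed: a somewhat longer program
    consistent=lambda d: PI_BITS.startswith(d),
    p_next_is_0=lambda d: 1.0 if PI_BITS[len(d)] == "0" else 0.0,
)

print(predict_next_is_0("11001001", [coin, pi_hypothesis]))
# ~0.515: essentially fifty-fifty, nudged toward 0 by the pi hypothesis.
```

The π hypothesis survives the consistency check and predicts 0 with certainty, so the weighted average lands just above one half, which is exactly the nudge the passage describes.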

Mathematicians love this procedure for its purity.


